While recently hacking on htop, there was a bug about large memory sizes being displayed wrong. But to test for it you need a process that uses 98 GiB of memory. Luckily any kind of memory will do, so virtual memory is enough. How do you get a process to hold a fixed, arbitrarily large amount of virtual memory? Sure, I could write a few lines of C to do it, but with numpy already present on my system it's way easier:
$ echo 1 | sudo tee /proc/sys/vm/overcommit_memory
$ python
>>> import numpy
>>> x = numpy.empty([1024**3 // 8, 98])
First we tell the kernel to always accept memory allocations, even if the requested size is way over the available memory (1 = always overcommit). Then in numpy we create an empty matrix with the right shape to use the desired amount of space. Since each float is 8 bytes large, we use 1024^3/8 as one dimension and can then set the number of GiB as the second dimension.
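To double-check the arithmetic (a quick illustration, assuming numpy's default float64 dtype), you can ask the array itself how many bytes it maps:

>>> x.nbytes              # (1024**3 // 8) * 98 elements * 8 bytes each
105226698752
>>> x.nbytes // 1024**3   # exactly 98 GiB
98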
The advantage over a statically compiled C program that calls malloc is that you can change the size on the fly for free: just overwrite x with a new empty array of the new desired size…
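For instance (the 42 here is just an arbitrary example size), switching to 42 GiB in the same session is a single reassignment:

>>> x = numpy.empty([1024**3 // 8, 42])   # rebinding x frees the old 98 GiB mapping (assuming nothing else references it)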
When you're finished, restore the default setting for overcommitment, in which the kernel uses heuristics to decide whether it should accept a memory allocation:
$ echo 0 | sudo tee /proc/sys/vm/overcommit_memory
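If you want to verify which mode is currently active (just the standard sysctl read, nothing specific to this trick):

$ cat /proc/sys/vm/overcommit_memory
0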
Happy hacking.